56 research outputs found

    Efficient Routing Protocol in Delay Tolerant Networks (DTNs)

    Modern Internet protocols perform poorly in networks where connectivity between end nodes is intermittent due to dynamic topology or resource constraints. Network environments in which nodes have only opportunistic connectivity are referred to as Delay Tolerant Networks (DTNs). Useful in numerous practical applications such as low-density mobile ad hoc networks, command/response military networks, and wireless sensor networks, DTNs have attracted a significant amount of research effort over the past decade. Routing is one of the major components affecting the overall performance of DTNs in terms of resource consumption, data delivery, and latency. Over the past few years, a number of routing protocols have been proposed; the focus of this thesis is on describing, classifying, and comparing them. We discuss state-of-the-art routing schemes in opportunistic networks and classify them into two main categories, deterministic and stochastic routing, based on whether forwarding decisions are made with or without knowledge of the network topology and node trajectories. The protocols in each class have their own advantages and shortcomings. Within the stochastic category, simple flooding-based protocols are feasible in networks where there is little or no information about the topology and resources are not constrained. Epidemic routing is a flooding-based protocol that relies on the spread of messages through the network to deliver information to its destination. To demonstrate the performance of epidemic routing in networks with intermittent connectivity, we provide several simulation experiments and show that, with reasonable aggregate resource consumption, the protocol ensures eventual message delivery under minimal assumptions about node trajectories, network topology, and connectivity, relying only on a sufficient number of random pair-wise message exchanges among mobile nodes. We then introduce the network coding concept and discuss the advantages of coding-based information delivery in wireless networks. Network coding is a recently introduced paradigm for efficiently disseminating data in wireless networks, in which data flows from multiple sources are combined to increase throughput, reduce delay, and enhance robustness against node failures. Finally, we present simulation experiments showing the superiority of network coding for information delivery in wireless networks compared to pure flooding-based mechanisms.
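
    The pair-wise "anti-entropy" exchange at the heart of epidemic routing can be sketched as follows. This is a minimal illustration of the idea, with an assumed Node class and unbounded buffers, not the simulation setup used in the thesis.

        # Minimal sketch of epidemic routing's pair-wise "anti-entropy" exchange:
        # when two nodes meet, each pulls the messages it has not seen yet.
        # The Node class and unlimited buffers are simplifying assumptions.

        import random

        class Node:
            def __init__(self, node_id):
                self.node_id = node_id
                self.buffer = {}   # message_id -> payload; the keys act as the summary vector

            def originate(self, message_id, payload):
                self.buffer[message_id] = payload

            def anti_entropy(self, peer):
                """On contact, compare summary vectors and copy missing messages both ways."""
                for mid in peer.buffer.keys() - self.buffer.keys():
                    self.buffer[mid] = peer.buffer[mid]
                for mid in self.buffer.keys() - peer.buffer.keys():
                    peer.buffer[mid] = self.buffer[mid]

        # Toy run: random pair-wise contacts eventually deliver the message to every node.
        nodes = [Node(i) for i in range(10)]
        nodes[0].originate("m1", "hello from node 0")
        contacts = 0
        while any("m1" not in n.buffer for n in nodes):
            a, b = random.sample(nodes, 2)   # an opportunistic encounter
            a.anti_entropy(b)
            contacts += 1
        print(f"m1 reached all nodes after {contacts} random contacts")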

    GeoAnnotator: A Collaborative Semi-Automatic Platform for Constructing Geo-Annotated Text Corpora

    Ground-truth datasets are essential for the training and evaluation of any automated algorithm. As such, gold-standard annotated corpora underlie most advances in natural language processing (NLP). However, only a few relatively small (geo-)annotated datasets are available for geoparsing, i.e., the automatic recognition and geolocation of place references in unstructured text. The creation of geoparsing corpora that include both the recognition of place names in text and the matching of those names to toponyms in a geographic gazetteer (a process we call geo-annotation) is a laborious, time-consuming, and expensive task. The field lacks both efficient geo-annotation tools to support corpus building and design guidelines for the development of such tools. Here, we present the iterative design of GeoAnnotator, a web-based, semi-automatic, and collaborative visual analytics platform for geo-annotation. GeoAnnotator facilitates the collaborative, multi-annotator creation of large geo-annotated text corpora by providing computationally generated pre-annotations that human annotators can then correct and refine. The resulting corpora can be used to improve and benchmark geoparsing algorithms as well as various other spatial-language methods. Further, the iterative design process and the resulting design decisions can inform annotation platforms tailored to other application domains of NLP.
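
    A geo-annotation pairs a recognized place-name span with a toponym from a gazetteer. The record below is a minimal illustrative sketch of such an annotation; the field names, the gazetteer identifier, and the coordinates are assumptions for the example, not GeoAnnotator's actual schema.

        # Illustrative structure of a single geo-annotation: a recognized place-name span
        # linked to a gazetteer toponym. Field names are assumptions, not GeoAnnotator's schema.

        from dataclasses import dataclass

        @dataclass
        class GeoAnnotation:
            doc_id: str          # identifier of the source document
            start: int           # character offset where the place name begins
            end: int             # character offset where the place name ends (exclusive)
            surface_form: str    # the place name as written in the text
            gazetteer_id: str    # identifier of the matched toponym (hypothetical value below)
            latitude: float      # approximate coordinates of the matched toponym
            longitude: float

        # Example: in "Flooding reported in Paris, Texas" the span "Paris" should resolve to
        # Paris, TX rather than Paris, France; a human annotator corrects the machine
        # pre-annotation when the automatic match is wrong.
        ann = GeoAnnotation("doc-42", 21, 26, "Paris", "gaz:0001", 33.66, -95.56)
        print(ann)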

    Utilizing ICN/CCN for service and VM migration support in virtualized LTE systems

    One of the most important concepts in mobile networks such as LTE (Long Term Evolution) is service continuity: a mobile user moving from one network to another should not lose an on-going service. In cloud-based (virtualized) LTE systems, services are hosted on Virtual Machines (VMs) that can be migrated across multiple networks to locations from which they can be delivered well to mobile users. The migration of (1) the VMs and (2) the services running on them should happen in such a way that the disruption of an on-going service is minimized. In this paper we argue that ICN/CCN (Information Centric Networking / Content Centric Networking) is a technology that can efficiently support service and VM migration.

    Deep Learning on SAR Imagery: Transfer Learning Versus Randomly Initialized Weights

    Deploying deep learning on Synthetic Aperture Radar (SAR) data is becoming more common for mapping purposes. One such case is sea ice, which is highly dynamic and changes rapidly under the combined effect of wind, temperature, and ocean currents. Frequent mapping of sea ice is therefore necessary to ensure safe marine navigation. However, there is a general shortage of expert-labeled data for training deep learning algorithms, and fine-tuning a pre-trained model on SAR imagery is a potential solution. In this paper, we compare the performance of deep learning models trained from scratch with randomly initialized weights against pre-trained models that we fine-tune for this purpose. Our results show that the pre-trained models lead to better results, especially on test samples from the melt season.
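
    The two training regimes being compared, randomly initialized weights versus fine-tuning pretrained weights, follow a standard pattern. The sketch below illustrates it in PyTorch under assumed choices (ResNet-18 backbone, two SAR input channels, class count, learning rates) that are not the paper's exact configuration.

        # Sketch of the two training regimes compared above: random initialization vs.
        # fine-tuning ImageNet-pretrained weights. Backbone, head size and hyperparameters
        # are illustrative assumptions, not the paper's configuration.

        import torch
        import torch.nn as nn
        from torchvision import models  # requires torchvision >= 0.13 for the weights enum

        NUM_CLASSES = 4  # e.g. a handful of ice-type classes; purely illustrative

        def build_model(pretrained: bool) -> nn.Module:
            weights = models.ResNet18_Weights.IMAGENET1K_V1 if pretrained else None
            model = models.resnet18(weights=weights)
            # SAR channels (e.g. HH/HV) differ from RGB, so the first conv is replaced here,
            # discarding its pretrained weights while keeping the rest of the backbone.
            model.conv1 = nn.Conv2d(2, 64, kernel_size=7, stride=2, padding=3, bias=False)
            model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)
            return model

        scratch_model = build_model(pretrained=False)   # randomly initialized weights
        finetune_model = build_model(pretrained=True)   # pretrained backbone, new conv1/fc

        # A lower learning rate is commonly used when fine-tuning pretrained weights.
        opt_scratch = torch.optim.Adam(scratch_model.parameters(), lr=1e-3)
        opt_finetune = torch.optim.Adam(finetune_model.parameters(), lr=1e-4)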

    Applying SDN/OpenFlow in Virtualized LTE to support Distributed Mobility Management (DMM)

    Distributed Mobility Management (DMM) is a mobility management solution in which the mobility anchors are distributed rather than centralized. DMM can be applied in cloud-based (virtualized) Long Term Evolution (LTE) mobile networks to (1) provide session continuity to users across personal, local, and wide area networks without interruption and (2) support traffic redirection when a virtualized LTE entity, such as a virtualized Packet Data Network Gateway (P-GW) running on a virtualization platform, is migrated to another platform and the on-going sessions it supports need to be maintained. In this paper we argue that Software Defined Networking (SDN)/OpenFlow is the enabling technology that can efficiently support DMM in virtualized LTE systems.
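
    As a rough illustration of the redirection argued for here, the sketch below models a switch flow table as plain Python data and rewrites the entries anchoring on-going sessions at a migrated P-GW. It is a conceptual toy, not an actual OpenFlow controller; all names and addresses are invented for the example.

        # Conceptual illustration (not a real OpenFlow controller): after a virtualized
        # P-GW migrates, the controller updates matching flow entries so that on-going
        # sessions are redirected to the P-GW's new location. All values are invented.

        OLD_PGW = "10.0.1.10"   # address of the P-GW before migration (illustrative)
        NEW_PGW = "10.0.2.20"   # address after migration to another virtualization platform

        # A switch flow table, modeled as a list of match/action entries.
        flow_table = [
            {"match": {"dst_ip": OLD_PGW}, "action": {"forward": "port-to-old-platform"}},
            {"match": {"dst_ip": "10.0.3.30"}, "action": {"forward": "port-3"}},
        ]

        def redirect_pgw_flows(table, old_ip, new_ip, new_port):
            """Rewrite entries anchored at the old P-GW so traffic follows the migrated VM."""
            for entry in table:
                if entry["match"].get("dst_ip") == old_ip:
                    entry["action"] = {"set_dst_ip": new_ip, "forward": new_port}

        redirect_pgw_flows(flow_table, OLD_PGW, NEW_PGW, "port-to-new-platform")
        for entry in flow_table:
            print(entry)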

    Comparison of Cross-Entropy, Dice, and Focal Loss for Sea Ice Type Segmentation

    Up-to-date sea ice charts are crucial for safer navigation in ice-infested waters. Recently, Convolutional Neural Network (CNN) models have shown the potential to accelerate the generation of ice maps for large regions. However, results from CNN models still need to undergo scrutiny, as higher metric scores do not always translate into adequate outputs. Sea ice type classes are imbalanced, requiring special treatment during training. We evaluate how three different loss functions, some developed for imbalanced-class problems, affect the performance of CNN models trained to predict the dominant ice type in Sentinel-1 images. Although Dice and focal loss produce higher metric scores, the results from cross-entropy seem generally more physically consistent.
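
    The three losses being compared are standard. The sketch below gives minimal PyTorch implementations for multi-class segmentation, with an illustrative focal-loss gamma and Dice smoothing term rather than the paper's settings.

        # Sketch of the three losses compared above for multi-class segmentation
        # (logits: [B, C, H, W], targets: [B, H, W] with class indices).
        # The focal-loss gamma and Dice smoothing term are illustrative choices.

        import torch
        import torch.nn.functional as F

        def cross_entropy_loss(logits, targets):
            return F.cross_entropy(logits, targets)

        def focal_loss(logits, targets, gamma=2.0):
            # Down-weights well-classified pixels so rare ice types contribute more.
            ce = F.cross_entropy(logits, targets, reduction="none")   # per-pixel CE
            pt = torch.exp(-ce)                                       # prob. of the true class
            return ((1.0 - pt) ** gamma * ce).mean()

        def dice_loss(logits, targets, smooth=1.0):
            # Soft Dice averaged over classes; directly optimizes region overlap.
            num_classes = logits.shape[1]
            probs = F.softmax(logits, dim=1)
            onehot = F.one_hot(targets, num_classes).permute(0, 3, 1, 2).float()
            dims = (0, 2, 3)
            intersection = (probs * onehot).sum(dims)
            cardinality = probs.sum(dims) + onehot.sum(dims)
            dice = (2.0 * intersection + smooth) / (cardinality + smooth)
            return 1.0 - dice.mean()

        # Example usage on random tensors (5 illustrative ice-type classes):
        logits = torch.randn(2, 5, 64, 64)
        targets = torch.randint(0, 5, (2, 64, 64))
        print(cross_entropy_loss(logits, targets).item(),
              focal_loss(logits, targets).item(),
              dice_loss(logits, targets).item())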

    The Validity, Generalizability and Feasibility of Summative Evaluation Methods in Visual Analytics

    Many evaluation methods have been used to assess the usefulness of Visual Analytics (VA) solutions. These methods stem from a variety of origins with different assumptions and goals, which causes confusion about their capacity to prove usefulness. Moreover, the lack of discussion about evaluation processes may limit our potential to develop new evaluation methods specialized for VA. In this paper, we present an analysis of evaluation methods that have been used to summatively evaluate VA solutions. We provide a survey and taxonomy of the evaluation methods that have appeared in the VAST literature in the past two years. We then analyze these methods in terms of the validity and generalizability of their findings, as well as the feasibility of using them. We propose a new metric called summative quality to compare evaluation methods according to their ability to prove usefulness, and make recommendations for selecting evaluation methods based on their summative quality in the VA domain.
    Comment: IEEE VIS (VAST) 201

    City-level Geolocation of Tweets for Real-time Visual Analytics

    Real-time tweets can provide useful information on evolving events and situations. Geotagged tweets are especially useful, as they indicate the location of origin and provide geographic context. However, only a small portion of tweets are geotagged, limiting their use for situational awareness. In this paper, we adapt, improve, and evaluate a state-of-the-art deep learning model for city-level geolocation prediction, and integrate it with a visual analytics system tailored for real-time situational awareness. We provide computational evaluations to demonstrate the superiority and utility of our geolocation prediction model within an interactive system.
    Comment: 4 pages, 2 tables, 1 figure, SIGSPATIAL GeoAI Workshop
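
    City-level geolocation of this kind is commonly cast as text classification over a fixed set of candidate cities. The sketch below illustrates that framing with an assumed minimal model, not the architecture adapted in the paper.

        # Minimal sketch of city-level geolocation as text classification: a tweet's tokens
        # are embedded, mean-pooled, and mapped to scores over candidate cities.
        # The architecture and vocabulary handling are assumptions, not the paper's model.

        import torch
        import torch.nn as nn

        class CityGeolocator(nn.Module):
            def __init__(self, vocab_size, num_cities, embed_dim=128):
                super().__init__()
                self.embedding = nn.EmbeddingBag(vocab_size, embed_dim)  # mean-pools tokens
                self.classifier = nn.Linear(embed_dim, num_cities)

            def forward(self, token_ids, offsets):
                return self.classifier(self.embedding(token_ids, offsets))

        # Toy usage: two tweets packed into one flat tensor of token ids.
        model = CityGeolocator(vocab_size=30_000, num_cities=3_000)
        token_ids = torch.randint(0, 30_000, (12,))
        offsets = torch.tensor([0, 7])           # tweet 1: tokens 0..6, tweet 2: tokens 7..11
        city_logits = model(token_ids, offsets)  # shape: [2, 3000]
        predicted_city = city_logits.argmax(dim=1)
        print(predicted_city)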